Despite significant progress in object categorization in recent years, a number of important challenges remain; chiefly, the ability to learn from limited labeled data and to recognize object classes within large, potentially open, sets of labels. Zero-shot learning is one way of addressing these challenges, but it has only been shown to work with limited-sized class vocabularies and typically requires a separation between supervised and unsupervised classes, allowing the former to inform the latter but not vice versa. We propose the notion of vocabulary-informed learning to alleviate the above-mentioned challenges and address the problems of supervised, zero-shot, generalized zero-shot, and open set recognition using a unified framework. Specifically, we propose a weighted maximum-margin framework for semantic manifold-based recognition that incorporates distance constraints from (both supervised and unsupervised) vocabulary atoms. The distance constraints ensure that labeled samples are projected closer to their correct prototypes in the embedding space than to others. We illustrate that the resulting model shows improvements in supervised, zero-shot, generalized zero-shot, and large open set recognition, with up to a 310K-class vocabulary, on the Animals with Attributes and ImageNet datasets.
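To make the distance-constraint idea concrete, here is a minimal sketch of the kind of max-margin constraint described, assuming samples have already been projected into the semantic embedding space; the names (`vocabulary_informed_margin_loss`, `prototypes`) and the unweighted hinge form are illustrative assumptions, not the paper's exact weighted formulation.

```python
import numpy as np

def vocabulary_informed_margin_loss(z, y, prototypes, margin=0.1):
    """Hinge-style distance constraint: each embedded sample z[i] should lie
    closer to its correct class prototype than to every other vocabulary atom,
    by at least `margin`. A toy, unweighted sketch of the idea."""
    loss = 0.0
    for i in range(len(z)):
        d = np.linalg.norm(prototypes - z[i], axis=1) ** 2  # squared distances to all atoms
        d_correct = d[y[i]]
        others = np.delete(d, y[i])
        # penalize any atom that is not farther than the correct prototype by the margin
        loss += np.maximum(0.0, margin + d_correct - others).sum()
    return loss / len(z)
```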
Graph-based change point detection (CPD) plays an irreplaceable role in discovering anomalous graphs in time-varying networks. While several techniques have been proposed to detect change points by identifying whether there is a significant difference between the target network and the successive previous ones, they neglect the natural evolution of the network. In practice, real-world graphs such as social networks, traffic networks, and rating networks are constantly evolving over time. With this in mind, we treat the problem as a prediction task and propose a novel CPD method for dynamic graphs via a latent evolution model. Our method focuses on learning the low-dimensional representations of networks and capturing the evolving patterns of these learned latent representations simultaneously. Once the evolving patterns are captured, a prediction of the target network can be made. We then detect change points by comparing the prediction against the actual network, leveraging a trade-off strategy that balances the importance of the prediction network against the normal graph pattern extracted from previous networks. Extensive experiments conducted on both synthetic and real-world datasets show the effectiveness and superiority of our model.
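A toy sketch of the detection step described above, assuming network snapshots have already been mapped to latent representations; the linear extrapolation stands in for the learned latent evolution model, and all names (`change_point_score`, `alpha`) are illustrative.

```python
import numpy as np

def change_point_score(emb_history, emb_actual, alpha=0.5):
    """Compare the actual snapshot's latent representation against
    (a) a prediction extrapolated from the latent trajectory and
    (b) the normal pattern averaged over past snapshots.
    `alpha` is the trade-off weight between the two terms."""
    prev, last = emb_history[-2], emb_history[-1]
    predicted = last + (last - prev)            # linear stand-in for the learned evolution model
    normal = np.mean(emb_history, axis=0)       # "normal graph pattern" from previous networks
    score = alpha * np.linalg.norm(emb_actual - predicted) \
          + (1 - alpha) * np.linalg.norm(emb_actual - normal)
    return score  # flag a change point when the score exceeds a threshold
```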
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practices, as well as the bottlenecks, faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%). 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based. Of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling, based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
Current advances in recommender systems have been remarkably successful in optimizing immediate engagement. However, long-term user engagement, a more desirable performance metric, remains difficult to improve. Meanwhile, recent reinforcement learning (RL) algorithms have shown their effectiveness in a variety of long-term goal optimization tasks. For this reason, RL is widely considered a promising framework for optimizing long-term user engagement in recommendation. Although promising, the application of RL heavily relies on well-designed rewards, and designing rewards related to long-term user engagement is quite difficult. To mitigate this problem, we propose a novel paradigm, Preference-based Recommender systems (PrefRec), which allows RL recommender systems to learn from preferences over users' historical behaviors rather than explicitly defined rewards. Such preferences are easily accessible through techniques such as crowdsourcing, as they do not require any expert knowledge. With PrefRec, we can fully exploit the advantages of RL in optimizing long-term goals while avoiding complex reward engineering. PrefRec uses the preferences to automatically train a reward function in an end-to-end manner. The reward function is then used to generate learning signals to train the recommendation policy. Furthermore, we design an effective optimization method for PrefRec, which uses an additional value function, expectile regression, and reward model pre-training to improve performance. Extensive experiments are conducted on a variety of long-term user engagement optimization tasks. The results show that PrefRec significantly outperforms previous state-of-the-art methods on all the tasks.
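The reward-learning step can be illustrated with a standard pairwise preference objective (a Bradley-Terry-style loss commonly used in preference-based RL); whether PrefRec uses exactly this form is an assumption, and all names here are illustrative.

```python
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    """Scores a trajectory of user-behavior features; trained from pairwise
    preferences rather than hand-designed rewards (architecture assumed)."""
    def __init__(self, feat_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, traj):                    # traj: (batch, steps, feat_dim)
        return self.net(traj).sum(dim=1)        # sum per-step rewards -> (batch, 1)

def preference_loss(model, traj_preferred, traj_other):
    """Bradley-Terry objective: the preferred trajectory should receive
    the higher cumulative reward under the learned reward function."""
    r_pref = model(traj_preferred)
    r_other = model(traj_other)
    return -torch.log(torch.sigmoid(r_pref - r_other)).mean()
```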
Prompt tuning attempts to update only a few task-specific parameters in a pre-trained model. Its performance is comparable to fine-tuning all parameters on language understanding and generation tasks. In this work, we study the problem of prompt tuning for neural text retrievers. We introduce parameter-efficient prompt tuning for text retrieval across in-domain, cross-domain, and cross-topic settings. Through an extensive analysis, we show that this strategy can mitigate two issues of fine-tuning-based retrieval methods: parameter inefficiency and weak generalizability. Notably, it can significantly improve the zero-shot generalization of retrieval models. By updating only 0.1% of the model parameters, the prompt tuning strategy helps retrieval models achieve better generalization performance than traditional methods in which all parameters are updated. Finally, to facilitate research on retrievers' cross-topic generalizability, we curate and release an academic retrieval dataset with 18K queries over 87 topics, making it the largest topic-specific dataset of its kind to date.
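A minimal sketch of what parameter-efficient prompt tuning looks like in code, assuming a frozen encoder that accepts token embeddings directly; the class name, prompt length, and wiring are illustrative assumptions rather than the paper's implementation.

```python
import torch
import torch.nn as nn

class PromptedEncoder(nn.Module):
    """Prompt tuning sketch: the pre-trained encoder is frozen and only a
    small matrix of continuous prompt vectors (roughly the 0.1% of parameters
    mentioned) is learned and prepended to the input sequence."""
    def __init__(self, encoder, embed_dim, prompt_len=16):
        super().__init__()
        self.encoder = encoder
        for p in self.encoder.parameters():
            p.requires_grad = False             # keep all pre-trained weights fixed
        self.prompt = nn.Parameter(torch.randn(prompt_len, embed_dim) * 0.02)

    def forward(self, token_embeds):            # token_embeds: (batch, seq, embed_dim)
        batch = token_embeds.size(0)
        prompt = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        return self.encoder(torch.cat([prompt, token_embeds], dim=1))
```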
Regression learning is classic and fundamental for medical image analysis. It provides continuous mappings for many critical applications, such as attribute estimation, object detection, segmentation, and non-rigid registration. However, previous studies have mainly taken case-wise criteria, such as mean squared error, as the optimization objective. They have ignored the very important population-level correlation criterion, which is exactly the final evaluation metric in many tasks. In this work, we propose to revisit the classic regression tasks with novel investigations on directly optimizing fine-grained correlation losses. We mainly explore two complementary correlation indexes as learnable losses: Pearson linear correlation (PLC) and Spearman rank correlation (SRC). The contributions of this paper are two-fold. First, for PLC at the global level, we propose a strategy to make it robust against outliers and to regularize key distribution factors. These efforts significantly stabilize the learning and magnify the efficacy of PLC. Second, for SRC at the local level, we propose a coarse-to-fine scheme to ease the learning of the exact ranking order among samples. Specifically, we convert the learning of sample rankings into the learning of similarity relationships among samples. We extensively validate our method on two typical ultrasound image regression tasks, namely image quality assessment and biometric measurement. Experiments prove that, with the fine-grained guidance of directly optimized correlation, the regression performance is significantly improved. Our proposed correlation losses are general and can be extended to more important applications.
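As an illustration of using correlation directly as a training signal, below is a plain differentiable Pearson correlation loss; it omits the outlier-robustness and distribution-regularization strategies the paper adds on top, and the SRC counterpart is not shown.

```python
import torch

def pearson_correlation_loss(pred, target, eps=1e-8):
    """Differentiable Pearson linear correlation (PLC) as a loss: returns
    1 - r, so minimizing the loss maximizes the correlation between the
    predictions and the targets over a batch."""
    pred_c = pred - pred.mean()
    target_c = target - target.mean()
    r = (pred_c * target_c).sum() / (pred_c.norm() * target_c.norm() + eps)
    return 1.0 - r
```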
Ophthalmologists have used fundus images to screen for and diagnose eye diseases. However, different devices and ophthalmologists introduce large variations in the quality of fundus images. Low-quality (LQ) degraded fundus images easily lead to uncertainty in clinical screening and generally increase the risk of misdiagnosis. Thus, real fundus image restoration is worth studying. Unfortunately, no real clinical benchmark has been explored for this task so far. In this paper, we investigate the real clinical fundus image restoration problem. First, we establish a clinical dataset, Real Fundus (RF), comprising 120 low-quality and high-quality (HQ) image pairs. Then we propose a novel Transformer-based generative adversarial network (RFormer) to restore the real degradation of clinical fundus images. The key component of our network is the window-based self-attention block (WSAB), which captures non-local self-similarity and long-range dependencies. To produce more visually pleasing results, a Transformer-based discriminator is introduced. Extensive experiments on our clinical benchmark show that the proposed RFormer significantly outperforms state-of-the-art (SOTA) methods. Furthermore, experiments on downstream tasks such as vessel segmentation and optic disc/cup detection demonstrate that our proposed RFormer benefits clinical fundus image analysis and applications. The dataset, code, and models will be released.
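A rough sketch of the window-based self-attention idea behind WSAB, assuming feature maps whose height and width are divisible by the window size; the surrounding block structure (normalization, projections, shifts) of the actual RFormer block is not reproduced here.

```python
import torch
import torch.nn as nn

class WindowSelfAttention(nn.Module):
    """Window-based self-attention sketch: the feature map is partitioned
    into non-overlapping windows and multi-head self-attention is applied
    within each window, capturing local non-self-similar dependencies cheaply."""
    def __init__(self, dim, window=8, heads=4):
        super().__init__()
        self.window = window
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):                       # x: (B, H, W, C), H and W divisible by window
        B, H, W, C = x.shape
        w = self.window
        # partition into (B * num_windows, w*w, C) token sequences
        x = x.view(B, H // w, w, W // w, w, C).permute(0, 1, 3, 2, 4, 5)
        x = x.reshape(-1, w * w, C)
        out, _ = self.attn(x, x, x)             # attention within each window
        out = out.view(B, H // w, W // w, w, w, C).permute(0, 1, 3, 2, 4, 5)
        return out.reshape(B, H, W, C)
```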
Many open-source and commercial malware detectors are available. However, the efficacy of these tools is threatened by novel adversarial attacks, whereby malware attempts to evade detection using, for example, machine learning techniques. In this work, we design adversarial evasion attacks that rely on feature-space and problem-space manipulation. The attack uses scalability-oriented feature selection to maximize evasion by identifying the features that most affect detection. We then use this attack as a benchmark to evaluate several state-of-the-art malware detectors. We find that (i) state-of-the-art malware detectors are vulnerable to even simple evasion strategies and can be easily tricked using off-the-shelf techniques; (ii) feature-space manipulation and problem-space obfuscation can be combined to achieve evasion without requiring a white-box understanding of the detector; and (iii) explainability approaches (e.g., SHAP) can be used to guide the feature manipulation and to explain how the attacks transfer across multiple detectors. Our findings shed light on the weaknesses of current malware detectors, as well as on how they can be improved.
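The feature-space side of such an attack can be sketched as follows, assuming a scikit-learn-style detector and a precomputed per-feature importance vector (the paper uses SHAP scores); the function and its loop are purely illustrative, and problem-space constraints that keep the malware functional are not modeled.

```python
import numpy as np

def importance_guided_evasion(x, model, importances, k=10, step=1.0):
    """Rank features by an explainability score and perturb only the top-k
    most detection-critical ones until the detector's malware score drops.
    `model` is assumed to expose a scikit-learn-style predict_proba."""
    x_adv = x.copy()
    top = np.argsort(importances)[::-1][:k]     # most influential features first
    for f in top:
        x_adv[f] -= step                        # nudge the feature toward the benign region
        if model.predict_proba(x_adv.reshape(1, -1))[0, 1] < 0.5:
            break                               # detector no longer flags the sample
    return x_adv
```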
Flying vertebrates exhibit sophisticated wingbeat kinematics. Their specialized forelimbs allow the wing-morphing motion to be coupled with the flapping motion during level flight. Previous flyable bionic platforms have successfully applied bio-inspired wing morphing but could not be driven by morphing-coupled wingbeat patterns. Motivated by this, we developed a bio-inspired flapping-wing aerial vehicle (FWAV), named RoboFalcon, equipped with a novel mechanism to drive bat-style morphing wings, perform a morphing-coupled wingbeat pattern, and, overall, manage an appealing flight. RoboFalcon's novel mechanism allows morphing and flapping to be coupled during cruising and decoupled when maneuvering is required, producing a bilaterally asymmetric downstroke that provides high roll agility. The bat-style morphing wing is designed with a tilted mounting angle around the radius at the wrist joint to mimic the wrist motion of flying vertebrates. RoboFalcon's agility is assessed through several rolling-maneuver flight tests, and we demonstrate its well-performing agility compared with flying creatures and current flapping-wing platforms. Wind tunnel tests indicate that the roll moment of the asymmetric downstroke is correlated with the flapping frequency, and that the wrist mounting angle can be used to tune the angle of attack and the lift-thrust configuration of cruising flight. We believe this work yields a well-performing bionic platform and provides a new drive strategy for morphing-coupled flapping flight.
Metric learning aims to learn a distance metric such that semantically similar instances are pulled together while dissimilar instances are pushed apart. Many existing methods consider maximizing, or at least constraining, the distance margin in the feature space that separates similar and dissimilar pairs of instances, so as to guarantee their generalization ability. In this paper, we advocate imposing an adversarial margin in the input space to improve the generalization and robustness of metric learning algorithms. We first show that the adversarial margin, defined as the distance between training instances and their closest adversarial examples, takes account of both the distance margin in the feature space and the correlation between the metric and the triplet constraints. Next, to enhance robustness against instance perturbation, we propose to enlarge the adversarial margin by minimizing a novel loss function termed the perturbation loss. The proposed loss can be viewed as a data-dependent regularizer and can be easily plugged into any existing metric learning method. Finally, we show that the enlarged margin benefits the generalization ability, using the theoretical techniques of algorithmic robustness. Experimental results on 16 datasets demonstrate the superiority of the proposed method over existing state-of-the-art methods in both discrimination accuracy and robustness against possible noise.
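One way to picture the perturbation-loss regularizer is through a first-order estimate of the input-space margin: the feature-space triplet gap divided by its input gradient norm approximates how far the input must move to flip the triplet. The sketch below is an assumed simplification of that idea, not the paper's exact loss.

```python
import torch

def perturbation_loss(embed_fn, x_anchor, x_pos, x_neg, eps=1e-8):
    """First-order estimate of the adversarial margin for a triplet:
    (feature-space gap) / (gradient norm w.r.t. the anchor input).
    Minimizing the negative estimate enlarges the input-space margin."""
    x_anchor = x_anchor.clone().requires_grad_(True)
    za, zp, zn = embed_fn(x_anchor), embed_fn(x_pos), embed_fn(x_neg)
    gap = (za - zn).pow(2).sum(dim=1) - (za - zp).pow(2).sum(dim=1)
    grad = torch.autograd.grad(gap.sum(), x_anchor, create_graph=True)[0]
    margin_est = gap / (grad.flatten(1).norm(dim=1) + eps)
    return -margin_est.mean()                   # regularizer: push the margin up
```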